IP address
Windscribe review: Despite the annoyances, it has the right idea
The first step is always to figure out how easy or hard the VPN is to use. Windscribe and other VPNs are important tools, but you'll never use them if the UI gets in the way. I tested Windscribe's desktop apps on Windows and Mac, its mobile apps on iOS and Android and its Chrome and Firefox browser extensions. To start with, let me say that installing Windscribe is a breeze no matter where you do it. The downloaders and installers handle their own business, only requiring you to grant a few permissions. The apps arrive on your system ready to use out of the box.
- Europe (1.00)
- Asia (1.00)
- North America > United States (0.68)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Media (0.96)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Communications > Mobile (0.89)
- (2 more...)
Understanding Privacy Risks in Code Models Through Training Dynamics: A Causal Approach
Yang, Hua, Velasco, Alejandro, Fang, Sen, Xu, Bowen, Poshyvanyk, Denys
Large language models for code (LLM4Code) have greatly improved developer productivity but also raise privacy concerns due to their reliance on open-source repositories containing abundant personally identifiable information (PII). Prior work shows that commercial models can reproduce sensitive PII, yet existing studies largely treat PII as a single category and overlook the heterogeneous risks among different types. We investigate whether distinct PII types vary in their likelihood of being learned and leaked by LLM4Code, and whether this relationship is causal. Our methodology includes building a dataset with diverse PII types, fine-tuning representative models of different scales, computing training dynamics on real PII data, and formulating a structural causal model to estimate the causal effect of learnability on leakage. Results show that leakage risks differ substantially across PII types and correlate with their training dynamics: easy-to-learn instances such as IP addresses exhibit higher leakage, while harder types such as keys and passwords leak less frequently. Ambiguous types show mixed behaviors. This work provides the first causal evidence that leakage risks are type-dependent and offers guidance for developing type-aware and learnability-aware defenses for LLM4Code.
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
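The "training dynamics" the abstract builds on are commonly summarized per training example as confidence (mean probability the model assigns to the gold sequence across epochs) and variability (its standard deviation); easy-to-learn instances show high confidence and low variability. A minimal sketch of that summary, with illustrative numbers (the paper's exact quantities may differ):

```python
import statistics

def training_dynamics(epoch_probs):
    """Summarize one training example's dynamics across epochs:
    returns (confidence, variability) = (mean, population std-dev)
    of the per-epoch probabilities assigned to the gold sequence.
    High confidence / low variability marks an easy-to-learn example."""
    confidence = statistics.mean(epoch_probs)
    variability = statistics.pstdev(epoch_probs)
    return confidence, variability

# Hypothetical trajectories: an IP-address-like string the model
# memorizes quickly vs. a random key it never fits well.
easy = training_dynamics([0.62, 0.81, 0.90, 0.94])
hard = training_dynamics([0.05, 0.08, 0.07, 0.11])
```

Under the paper's finding, examples that land in the high-confidence region (like `easy` here) are the ones with elevated leakage risk.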
Unintentional Consequences: Generative AI Use for Cybercrime
Luu, Truong Jack, Samuel, Binny M.
The democratization of generative AI introduces new forms of human-AI interaction and raises urgent safety, ethical, and cybersecurity concerns. We develop a socio-technical explanation for how generative AI enables and scales cybercrime. Drawing on affordance theory and technological amplification, we argue that generative AI systems create new action possibilities for cybercriminals and magnify pre-existing malicious intent by lowering expertise barriers and increasing attack efficiency. To illustrate this framework, we conduct interrupted time series analyses of two large datasets: (1) 464,190,074 malicious IP address reports from AbuseIPDB, and (2) 281,115 cryptocurrency scam reports from Chainabuse. Using November 30, 2022, as a high-salience public-access shock, we estimate the counterfactual trajectory of reported cyber abuse absent the release, providing an early-warning impact assessment of a general-purpose AI technology. Across both datasets, we observe statistically significant post-intervention increases in reported malicious activity, including an immediate increase of over 1.12 million weekly malicious IP reports and about 722 weekly cryptocurrency scam reports, with sustained growth in the latter. We discuss implications for AI governance, platform-level regulation, and cyber resilience, emphasizing the need for multi-layer socio-technical strategies that help key stakeholders maximize AI's benefits while mitigating its growing cybercrime risks.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.67)
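The interrupted time series design described above can be sketched as a segmented regression with a level shift and a slope change switching on at the intervention date. The function and the synthetic weekly counts below are illustrative, not the authors' actual model or data:

```python
import numpy as np

def interrupted_time_series(y, t0):
    """Fit y_t = b0 + b1*t + b2*post + b3*(t - t0)*post by least
    squares, where post = 1 from index t0 onward. Returns the
    coefficients: [baseline, pre-trend, level shift, slope change]."""
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, post * (t - t0)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta

# Synthetic weekly counts: pre-trend 50 + 2t, then a level shift
# and steeper slope from week 10 onward.
y = [50 + 2 * t for t in range(10)] + [170 + 3 * t for t in range(10, 20)]
b0, b1, level, slope = interrupted_time_series(y, 10)
```

The `level` coefficient plays the role of the immediate post-release jump the study reports (e.g. the 1.12 million extra weekly IP reports), and `slope` the sustained growth.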
Rise of the 'porno-trolls': how one porn platform made millions suing its viewers
Instead, it was a subpoena. He had been sued in federal court for illegally downloading 80 movies. Some of the titles sounded cryptic - Do Not Worry, We Are Only Friends - or banal, like International Relations Part 2. Others were less subtle: He Loved My Big Ass, He Loved My Big Butt, and My Big Booty Loves Anal. Brown, who had spent decades investigating sex crimes, claimed he had never watched any of them. His years "dealing with pimping", he wrote in a court filing, left him "with no interest in pornography". He had been married for 40 years; he did not need to download Hot Wife, another title on the list.
- Oceania > Australia (0.04)
- North America > United States > Texas (0.04)
- North America > United States > Oregon (0.04)
- (9 more...)
- Media > Film (1.00)
- Law > Litigation (1.00)
- Law > Intellectual Property & Technology Law (1.00)
- Government > Regional Government > North America Government > United States Government (0.46)
Using Salient Object Detection to Identify Manipulative Cookie Banners that Circumvent GDPR
Grossman, Riley, Smith, Michael, Borcea, Cristian, Chen, Yi
The main goal of this paper is to study how often cookie banners that comply with the General Data Protection Regulation (GDPR) contain aesthetic manipulation, a design tactic to draw users' attention to the button that permits personal data sharing. As a byproduct of this goal, we also evaluate how frequently the banners comply with GDPR and the recommendations of national data protection authorities regarding banner designs. We visited 2,579 websites and identified the type of cookie banner implemented. Although 45% of the relevant websites have fully compliant banners, we found aesthetic manipulation on 38% of the compliant banners. Unlike prior studies of aesthetic manipulation, we use a computer vision model for salient object detection to measure how salient (i.e., attention-drawing) each banner element is. This enables the discovery of new types of aesthetic manipulation (e.g., button placement), and leads us to conclude that aesthetic manipulation is more common than previously reported (38% vs 27% of banners). To study the effects of user and/or website location on cookie banner design, we include websites within the European Union (EU), where privacy regulation enforcement is more stringent, and websites outside the EU. We visited websites from IP addresses in the EU and from IP addresses in the United States (US). We find that 13.9% of EU websites change their banner design when the user is from the US, and EU websites are roughly 48.3% more likely to use aesthetic manipulation than non-EU websites, highlighting their innovative responses to privacy regulation.
- Europe > United Kingdom (0.04)
- North America > United States > New Jersey > Essex County > Newark (0.04)
- Europe > Ukraine > Kyiv Oblast > Kyiv (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.48)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
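The measurement idea above can be sketched simply: run salient object detection over a banner screenshot, then compare the mean saliency inside each button's bounding box. The decision rule and threshold below are hypothetical illustrations, not the paper's method:

```python
import numpy as np

def region_saliency(saliency_map, box):
    """Mean saliency inside a bounding box (x, y, w, h), given a
    salient-object-detection output normalized to [0, 1]."""
    x, y, w, h = box
    return float(saliency_map[y:y + h, x:x + w].mean())

def flags_aesthetic_manipulation(saliency_map, accept_box, reject_box, ratio=1.5):
    """Hypothetical rule: flag a banner when the data-sharing button
    draws disproportionately more visual attention than the reject
    option. The 1.5x ratio is an assumption, not from the paper."""
    return region_saliency(saliency_map, accept_box) >= ratio * region_saliency(saliency_map, reject_box)

# Toy 10x10 saliency map where the accept-button region is bright.
m = np.full((10, 10), 0.1)
m[2:5, 2:8] = 0.9
flagged = flags_aesthetic_manipulation(m, (2, 2, 6, 3), (2, 7, 6, 2))
```

Because the comparison is over measured saliency rather than a checklist of known patterns, it can surface manipulation types (like button placement) that rule-based audits miss, which is how the paper arrives at its higher 38% estimate.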
Meta Claims Downloaded Porn at Center of AI Lawsuit Was for 'Personal Use'
In a motion to dismiss filed earlier this week, Meta denied claims that employees had downloaded pornography from Strike 3 Holdings to train its artificial intelligence models. This week, Meta asked a US district court to toss a lawsuit alleging that the tech giant illegally torrented pornography to train AI. The move comes after Strike 3 Holdings discovered illegal downloads of some of its adult films on Meta corporate IP addresses, as well as other downloads that Meta allegedly concealed using a "stealth network" of 2,500 "hidden IP addresses." Accusing Meta of stealing porn to secretly train an unannounced adult version of its AI model powering Movie Gen, Strike 3 sought damages that could have exceeded $350 million, TorrentFreak reported. Strike 3 also offered "no facts to suggest that Meta has ever trained an AI model on adult images or video, much less intentionally so," Meta claimed.
- North America > United States > California (0.15)
- North America > United States > New York (0.06)
- North America > United States > Texas (0.05)
- (4 more...)
- Law > Litigation (1.00)
- Government > Regional Government (0.96)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Networks (0.58)
You Have Been LaTeXpOsEd: A Systematic Analysis of Information Leakage in Preprint Archives Using Large Language Models
Dubniczky, Richard A., Borsos, Bertalan, Tihanyi, Norbert
The widespread use of preprint repositories such as arXiv has accelerated the communication of scientific results but also introduced overlooked security risks. Beyond PDFs, these platforms provide unrestricted access to original source materials, including LaTeX sources, auxiliary code, figures, and embedded comments. In the absence of sanitization, submissions may disclose sensitive information that adversaries can harvest using open-source intelligence. In this work, we present the first large-scale security audit of preprint archives, analyzing more than 1.2 TB of source data from 100,000 arXiv submissions. We introduce LaTeXpOsEd, a four-stage framework that integrates pattern matching, logical filtering, traditional harvesting techniques, and large language models (LLMs) to uncover hidden disclosures within non-referenced files and LaTeX comments. To evaluate LLMs' secret-detection capabilities, we introduce LLMSec-DB, a benchmark on which we tested 25 state-of-the-art models. Our analysis uncovered thousands of PII leaks, GPS-tagged EXIF files, publicly available Google Drive and Dropbox folders, editable private SharePoint links, exposed GitHub and Google credentials, and cloud API keys. We also uncovered confidential author communications, internal disagreements, and conference submission credentials, exposing information that poses serious reputational risks to both researchers and institutions. We urge the research community and repository operators to take immediate action to close these hidden security gaps. To support open science, we release all scripts and methods from this study but withhold sensitive findings that could be misused, in line with ethical principles. The source code and related material are available at the project website https://github.com/LaTeXpOsEd
- North America > United States (0.04)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- Asia > China (0.04)
- (5 more...)
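The pattern-matching stage of such a pipeline can be sketched with a few regular expressions over LaTeX sources: credential formats plus unescaped `%` comments. The patterns and helper below are illustrative examples, not LaTeXpOsEd's actual rule set:

```python
import re

# Illustrative detector patterns (a real scanner would carry many more).
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "google_api_key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
    # A '%' not preceded by a backslash starts a LaTeX comment,
    # where authors often leave notes never meant for readers.
    "latex_comment": re.compile(r"(?<!\\)%(.*)$", re.MULTILINE),
}

def scan_latex_source(text):
    """Return {finding_type: [matches]} for one .tex file's contents."""
    findings = {}
    for name, pattern in PATTERNS.items():
        hits = pattern.findall(text)
        if hits:
            findings[name] = hits
    return findings

sample = "Results shown.\n% TODO remove key AKIAABCDEFGHIJKLMNOP before submission\n"
findings = scan_latex_source(sample)
```

Matches from a stage like this would then be filtered and passed to the LLM stage to separate real secrets from placeholders, mirroring the four-stage design the abstract describes.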
RobustFlow: Towards Robust Agentic Workflow Generation
Xu, Shengxiang, Zhang, Jiayi, Di, Shimin, Luo, Yuyu, Yao, Liang, Liu, Hanmo, Zhu, Jia, Liu, Fan, Zhang, Min-Ling
The automated generation of agentic workflows is a promising frontier for enabling large language models (LLMs) to solve complex tasks. However, our investigation reveals that the robustness of agentic workflows remains a critical, unaddressed challenge. Current methods often generate wildly inconsistent workflows when provided with instructions that are semantically identical but differently phrased. This brittleness severely undermines their reliability and trustworthiness for real-world applications. To quantitatively diagnose this instability, we propose metrics based on nodal and topological similarity to evaluate workflow consistency against common semantic variations such as paraphrasing and noise injection. We then propose a novel training framework, RobustFlow, that leverages preference optimization to teach models invariance to instruction variations. By training on sets of synonymous task descriptions, RobustFlow boosts workflow robustness scores to 70%-90%, a substantial improvement over existing approaches. The code is publicly available at https://github.com/DEFENSE-SEU/RobustFlow.
- Workflow (1.00)
- Research Report (1.00)
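Nodal and topological similarity can be illustrated as set overlaps on a workflow's nodes and directed edges. The Jaccard formulation and the toy workflows below are a sketch of the idea; the paper's actual metrics may be more elaborate:

```python
def nodal_similarity(nodes_a, nodes_b):
    """Jaccard overlap of two workflows' node sets: how many of the
    same steps both workflows contain."""
    a, b = set(nodes_a), set(nodes_b)
    return len(a & b) / len(a | b) if a | b else 1.0

def topological_similarity(edges_a, edges_b):
    """Jaccard overlap of directed edge sets: how much of the
    step-to-step structure the workflows share."""
    a, b = set(edges_a), set(edges_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Two workflows generated from paraphrased versions of one task.
wf1 = (["load", "clean", "train"], [("load", "clean"), ("clean", "train")])
wf2 = (["load", "clean", "eval"], [("load", "clean"), ("clean", "eval")])
ns = nodal_similarity(wf1[0], wf2[0])
ts = topological_similarity(wf1[1], wf2[1])
```

A robust generator should push both scores toward 1.0 for semantically identical instructions; scores like these two would indicate the brittleness the paper diagnoses.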
A Graph-Based Approach to Alert Contextualisation in Security Operations Centres
Eckhoff, Magnus Wiik, Flydal, Peter Marius, Peters, Siem, Eian, Martin, Halvorsen, Jonas, Mavroeidis, Vasileios, Grov, Gudmund
Interpreting the massive volume of security alerts is a significant challenge in Security Operations Centres (SOCs). Effective contextualisation is important, enabling quick distinction between genuine threats and benign activity to prioritise what needs further analysis. This paper proposes a graph-based approach to enhance alert contextualisation in a SOC by aggregating alerts into graph-based alert groups, where nodes represent alerts and edges denote relationships within defined time-windows. By grouping related alerts, we enable analysis at a higher abstraction level, capturing attack steps more effectively than individual alerts. Furthermore, to show that our format is well suited for downstream machine learning methods, we employ Graph Matching Networks (GMNs) to correlate incoming alert groups with historical incidents, providing analysts with additional insights.
- Asia > Nepal (0.04)
- South America > Uruguay > Maldonado > Maldonado (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Communications > Collaboration (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
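The aggregation step the paper describes can be sketched as building a graph whose nodes are alerts, linking pairs that are related within a time window, and returning connected components as alert groups. The shared-IP linking rule below is an assumption for illustration; real SOC groupings would use richer relationships:

```python
from collections import defaultdict

def group_alerts(alerts, window):
    """Group alerts into graph components: connect two alerts when
    they share a source IP and arrive within `window` seconds, then
    return each connected component as one alert group."""
    n = len(alerts)
    adjacency = defaultdict(set)
    for i in range(n):
        for j in range(i + 1, n):
            same_ip = alerts[i]["ip"] == alerts[j]["ip"]
            close = abs(alerts[i]["t"] - alerts[j]["t"]) <= window
            if same_ip and close:
                adjacency[i].add(j)
                adjacency[j].add(i)
    seen, groups = set(), []
    for start in range(n):          # depth-first component extraction
        if start in seen:
            continue
        stack, component = [start], []
        while stack:
            k = stack.pop()
            if k in seen:
                continue
            seen.add(k)
            component.append(k)
            stack.extend(adjacency[k])
        groups.append(sorted(component))
    return groups

alerts = [
    {"ip": "10.0.0.5", "t": 0},
    {"ip": "10.0.0.5", "t": 30},
    {"ip": "192.168.1.9", "t": 40},
]
groups = group_alerts(alerts, window=60)
```

Each resulting group is the higher-abstraction unit the paper analyzes, and also the kind of graph a Graph Matching Network can compare against historical incidents.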
DDoS Attacks in Cloud Computing: Detection and Prevention
Ahmad, Zain, Ahmad, Musab, Ahmad, Bilal
DDoS attacks are one of the most prevalent and harmful cybersecurity threats faced by organizations and individuals today. In recent years, the complexity and frequency of DDoS attacks have increased significantly, making them challenging to detect and mitigate effectively. The study analyzes various types of DDoS attacks, including volumetric, protocol, and application-layer attacks, and discusses the characteristics, impact, and potential targets of each type. It also examines the existing techniques used for DDoS attack detection, such as packet filtering, intrusion detection systems, and machine learning-based approaches, and their strengths and limitations. Moreover, the study explores the prevention techniques employed to mitigate DDoS attacks, such as firewalls, rate limiting, and the CPP and ELD mechanisms. It evaluates the effectiveness of each approach and its suitability for different types of attacks and environments. In conclusion, this study provides a comprehensive overview of the different types of DDoS attacks and their detection and prevention techniques. It aims to provide insights and guidelines for organizations and individuals to enhance their cybersecurity posture and protect against DDoS attacks.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.74)
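Rate limiting, one of the prevention techniques the study surveys, is commonly implemented as a per-source token bucket: each source earns tokens at a steady rate and spends one per packet, so bursts beyond the bucket's capacity are dropped. This is a generic sketch of that building block, not the CPP or ELD mechanisms the study examines:

```python
class TokenBucket:
    """Per-source token bucket for rate limiting: absorbs short
    bursts up to `capacity`, then throttles to `rate` packets/sec."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        """Refill based on elapsed time, then admit the packet only
        if a whole token is available."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, capacity=5)
# Ten back-to-back packets at t=0: only the 5-token burst passes.
decisions = [bucket.allow(0.0) for _ in range(10)]
```

Against a volumetric flood, one bucket per source IP caps each sender's throughput while leaving well-behaved clients untouched, which is why rate limiting pairs naturally with the detection techniques listed above.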